


Appendix: Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning

Neural Information Processing Systems

Remark. Since we apply the row-wise softmax in Eq. (7), $\sum_j p_{ij} = 1 \;\forall i$ and $p_{ij} \geq 0 \;\forall (i,j)$ is always fulfilled. If $C(P) = 0$, all but one entry in a column $p_{\cdot,j}$ are 0 and the remaining entry is 1. Hence, $\sum_i p_{ij} = 1 \;\forall j$ is also fulfilled.

Synthetic random graph generation. To generate train and test graph datasets, we used the Python package NetworkX [1]. Ego graphs were extracted from binomial graphs ($p \in (0.2, 0.6)$) by selecting all neighbours of one random node.

Training details. We did not perform an extensive hyperparameter evaluation for the different experiments and mostly followed [2] for hyperparameter selection. We set the graph embedding dimension to 64.
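The constraints in the remark can be checked numerically: a row-wise softmax guarantees non-negative entries and unit row sums, and in the hard case (every column one-hot, i.e. the penalty term is zero) the column sums are 1 as well, so the matrix is a proper permutation matrix. A minimal NumPy sketch; the score matrix and its size are illustrative, not taken from the paper's model:

```python
import numpy as np

def row_softmax(scores):
    """Row-wise softmax: every row sums to 1 and all entries are >= 0."""
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
P = row_softmax(rng.normal(size=(4, 4)))  # hypothetical score matrix

# Row constraints hold by construction.
assert np.allclose(P.sum(axis=1), 1.0)
assert (P >= 0).all()

# In the hard case (each column one-hot), column sums are also 1,
# i.e. the matrix is a true permutation matrix.
P_hard = np.eye(4)[rng.permutation(4)]
assert np.allclose(P_hard.sum(axis=0), 1.0)
```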
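The graph-generation recipe above can be sketched with NetworkX. The graph size and seed handling below are assumptions (the appendix does not state them); the procedure itself follows the text: draw a binomial (Erdős–Rényi) graph with p uniform in (0.2, 0.6), pick one node at random, and keep that node together with all of its neighbours:

```python
import random
import networkx as nx

def sample_ego_graph(n_nodes=20, seed=None):
    """Sample one ego graph from a binomial random graph.

    n_nodes is an assumed size; the appendix does not specify it.
    """
    rng = random.Random(seed)
    p = rng.uniform(0.2, 0.6)                      # edge probability per the appendix
    G = nx.gnp_random_graph(n_nodes, p, seed=rng.randint(0, 2**31))
    center = rng.choice(list(G.nodes))             # one random node
    return nx.ego_graph(G, center, radius=1)       # node plus all its neighbours

g = sample_ego_graph(seed=42)
```

Because every node in a radius-1 ego graph is adjacent to the centre, the sampled graphs are always connected.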


Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning

Neural Information Processing Systems

Most work, however, focuses on either node- or graph-level supervised learning, such as node, link, or graph classification, or on node-level unsupervised learning (e.g., node clustering). Despite its wide range of possible applications, graph-level unsupervised representation learning has not received much attention yet. This might be mainly attributed to the high representation complexity of graphs: a graph with n nodes can be represented by up to n! equivalent adjacency matrices. In this work we address this issue by proposing a permutation-invariant variational autoencoder for graph-structured data.
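The equivalence of adjacency matrices under node relabelling can be illustrated directly: for any permutation matrix P, the matrix P A Pᵀ describes the same graph with permuted node labels, and with n nodes there are up to n! such matrices. A small sketch with an illustrative 3-node path graph:

```python
import numpy as np

# Adjacency matrix of the path graph 0-1-2 (illustrative example).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

perm = [2, 0, 1]                       # one of the 3! = 6 node relabellings
P = np.eye(3, dtype=int)[perm]         # corresponding permutation matrix
A_perm = P @ A @ P.T                   # equivalent adjacency matrix

# Graph invariants such as the degree sequence are unchanged.
assert sorted(A.sum(axis=0).tolist()) == sorted(A_perm.sum(axis=0).tolist())
```

A permutation-invariant encoder must map all n! such matrices to the same graph-level representation.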